
    Multitask Learning Deep Neural Networks to Combine Revealed and Stated Preference Data

    It is an enduring question how to combine revealed preference (RP) and stated preference (SP) data to analyze travel behavior. This study presents a framework of multitask learning deep neural networks (MTLDNNs) for this question and demonstrates that MTLDNNs are more generic than the traditional nested logit (NL) method, owing to their capacity for automatic feature learning and their soft constraints. About 1,500 MTLDNN models are designed and applied to survey data collected in Singapore, covering the RP of four current travel modes and the SP that adds autonomous vehicles (AV) as a new travel mode to those in the RP. We found that MTLDNNs consistently outperform six benchmark models, and in particular the classical NL models, by about 5% prediction accuracy on both the RP and SP datasets. This performance improvement is mainly attributable to the soft constraints specific to MTLDNNs, including their architectural design and regularization methods, and much less to the generic capacity for automatic feature learning endowed by a standard feedforward DNN architecture. Besides prediction, MTLDNNs are also interpretable. The empirical results show that AV mainly substitutes for driving and that AV alternative-specific variables are more important than socio-economic variables in determining AV adoption. Overall, this study introduces a new MTLDNN framework for combining RP and SP and demonstrates its theoretical flexibility and empirical power for prediction and interpretation. Future studies can design new MTLDNN architectures that reflect the specific characteristics of RP and SP and extend this work to other behavioral analyses.
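
    A minimal sketch may help make the architecture concrete. The following is illustrative only, not the authors' implementation: it assumes a PyTorch model whose shared feedforward layers feed two task-specific heads (a 4-way RP head and a 5-way SP head that adds AV), and it realizes the "soft constraints" as a hypothetical L2 penalty tying the two heads' overlapping weights together; all layer sizes and the penalty weight are placeholders.

        import torch
        import torch.nn as nn

        class MTLDNN(nn.Module):
            """Shared trunk with RP- and SP-specific heads (sizes illustrative)."""
            def __init__(self, n_features, hidden=64):
                super().__init__()
                # Shared layers perform the automatic feature learning
                # common to both the RP and SP tasks.
                self.shared = nn.Sequential(
                    nn.Linear(n_features, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                )
                self.rp_head = nn.Linear(hidden, 4)  # four current travel modes
                self.sp_head = nn.Linear(hidden, 5)  # the same four modes plus AV

            def forward(self, x, task):
                z = self.shared(x)
                return self.rp_head(z) if task == "rp" else self.sp_head(z)

        def soft_constraint(model, lam=1e-2):
            # Hypothetical soft constraint: penalize divergence between the RP
            # head and the first four rows of the SP head, nudging the two
            # tasks toward similar mode-choice parameters without hard tying.
            diff = model.rp_head.weight - model.sp_head.weight[:4]
            return lam * diff.pow(2).sum()

        model = MTLDNN(n_features=10)
        ce = nn.CrossEntropyLoss()
        x_rp, y_rp = torch.randn(32, 10), torch.randint(0, 4, (32,))
        x_sp, y_sp = torch.randn(32, 10), torch.randint(0, 5, (32,))
        loss = (ce(model(x_rp, "rp"), y_rp)      # RP task loss
                + ce(model(x_sp, "sp"), y_sp)    # SP task loss
                + soft_constraint(model))        # soft constraint term
        loss.backward()  # joint training over both datasets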

    On the power divergence in quasi gluon distribution function

    A recent perturbative calculation of the quasi gluon distribution function at one-loop level shows the existence of extra linear ultraviolet divergences in the cut-off scheme. We employ the auxiliary field approach and study the renormalization of gluon operators. The non-local gluon operator can mix with new operators under renormalization, and the linear divergences in the quasi distribution function can be absorbed into the newly introduced operators. After including the mixing, we find that the improved quasi gluon distribution functions contain only logarithmic divergences and thus can be used to extract the gluon distribution in large momentum effective theory.
    Comment: 18 pages, 10 figures. Published version in JHEP
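
    Schematically, the mixing described above can be written as follows; this is a generic renormalization sketch under our reading of the abstract, with the Z factors and the auxiliary-field operators O_i treated as placeholders rather than the paper's explicit results:

        \begin{align}
          O_g^{B}(z) &= Z_{gg}(z;\Lambda)\, O_g^{R}(z)
                        + \sum_i Z_{gi}(z;\Lambda)\, O_i^{R}(z), \\
          Z_{gi}(z;\Lambda) &\sim c_i\, \Lambda\, |z| + \text{logarithms},
        \end{align}

    so the linear cut-off divergence $\sim \Lambda$ resides in the mixing coefficients, and the improved quasi gluon distribution built from $O_g^{B}(z) - \sum_i Z_{gi}\, O_i^{R}(z)$ retains only logarithmic $\Lambda$ dependence.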